Learning text-video embeddings usually requires a dataset of video clips with manually provided captions. However, such datasets are expensive and time-consuming to create and therefore difficult to obtain on a large scale. In this work, we propose instead to learn such embeddings from video data with readily available natural language annotations in the form of automatically transcribed narrations. The contributions of this work are three-fold. First, we introduce HowTo100M: a large-scale dataset of 136 million video clips sourced from 1.22M narrated instructional web videos depicting humans performing and describing over 23k different visual tasks. Our data collection procedure is fast, scalable and does not require any additional manual annotation. Second, we demonstrate that a text-video embedding trained on this data leads to state-of-the-art results for text-to-video retrieval and action localization on instructional video datasets such as YouCook2 or CrossTask. Finally, we show that this embedding transfers well to other domains: fine-tuning on generic YouTube videos (MSR-VTT dataset) and movies (LSMDC dataset) outperforms models trained on these datasets alone. Our dataset, code and models are publicly available [1].
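Joint text-video embeddings of this kind are typically trained with a max-margin ranking loss that pushes matched clip-caption pairs above mismatched ones. As an illustrative sketch only (the paper's exact loss, margin value, and negative sampling may differ), computed over a batch of paired, L2-normalised embeddings:

```python
import numpy as np

def ranking_loss(video_emb, text_emb, margin=0.2):
    # video_emb, text_emb: (n, d) L2-normalised; row i of each forms a matched pair
    sims = video_emb @ text_emb.T              # cosine similarities
    pos = np.diag(sims)                        # similarities of matched pairs
    # hinge over both retrieval directions, excluding the positive pair itself
    loss_v2t = np.maximum(0.0, margin + sims - pos[:, None])
    loss_t2v = np.maximum(0.0, margin + sims - pos[None, :])
    n = len(pos)
    off = ~np.eye(n, dtype=bool)               # mask out the diagonal
    return float((loss_v2t[off].sum() + loss_t2v[off].sum()) / (n * (n - 1)))
```

With perfectly separated embeddings the loss is zero; collapsed video embeddings incur a positive penalty.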
We provide a unifying approximate dynamic programming framework that applies to a broad variety of problems involving sequential estimation. We consider first the construction of surrogate cost functions for the purposes of optimization, and we focus on the special case of Bayesian optimization, using the rollout algorithm and some of its variations. We then discuss the more general case of sequential estimation of a random vector using optimal measurement selection, and its application to problems of stochastic and adaptive control. We finally consider related search and sequential decoding problems, and a rollout algorithm for the approximate solution of the Wordle and Mastermind puzzles, recently developed in the paper [BBB22].
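To make the rollout idea concrete on a toy problem (a hypothetical illustration, not code from the paper), here is a one-step rollout for a 0/1 knapsack using a greedy value/weight heuristic as the base policy. Rollout's cost-improvement property guarantees the result is never worse than the base heuristic alone:

```python
def greedy_value(items, capacity):
    # base heuristic: take items in value/weight order while they fit
    total = 0
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= capacity:
            total += v
            capacity -= w
    return total

def rollout_knapsack(items, capacity):
    # one-step lookahead: evaluate take/skip by completing with the base heuristic
    value = 0
    for i, (v, w) in enumerate(items):
        rest = items[i + 1:]
        take = v + greedy_value(rest, capacity - w) if w <= capacity else -1
        skip = greedy_value(rest, capacity)
        if take >= skip:
            value += v
            capacity -= w
    return value
```

On instances where the greedy base policy is suboptimal, the rollout policy recovers a strictly better solution.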
In 2021, 300 mm of rain, nearly half the average annual rainfall, fell near Catania (Sicily, Italy) in just a few hours, with dramatic consequences for the environmental, social, economic, and health systems of the region. Detecting extreme rainfall events is therefore a crucial prerequisite for planning actions able to counter possibly intensified dramatic future scenarios. In this paper, the Affinity Propagation algorithm, a clustering algorithm grounded in machine learning, was applied, to the best of our knowledge for the first time, to identify extreme rainfall events in Sicily. This was made possible by a high-frequency, large dataset we collected, ranging from 2009 to 2021, which we named RSE (the Rainfall Sicily Extreme dataset). Weather indicators were then employed to validate the results, confirming the presence of recent anomalous rainfall events in eastern Sicily. We believe that easy-to-use and multi-modal data science techniques, such as the one proposed in this study, could give rise to significant improvements in policy-making for successfully countering climate change.
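For reference, Affinity Propagation exchanges "responsibility" and "availability" messages over a similarity matrix until a set of exemplars emerges (Frey and Dueck's update rules). A compact, generic implementation is sketched below; the paper's actual pipeline, similarity definition, and parameters will differ:

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Frey-Dueck message passing on similarity matrix S (diagonal = preferences)."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        M = A + S
        idx = np.argmax(M, axis=1)
        first = M[np.arange(n), idx]
        M[np.arange(n), idx] = -np.inf
        second = np.max(M, axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * R_new
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, np.diag(R))
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag = np.diag(A_new).copy()
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, diag)
        A = damping * A + (1 - damping) * A_new
    exemplars = np.flatnonzero(np.diag(A + R) > 0)
    labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
    labels[exemplars] = exemplars
    return labels
```

Unlike k-means, the number of clusters is not fixed in advance; it is controlled by the self-similarity "preference" values on the diagonal of S.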
The most popular AI methods and algorithms are, for the vast majority, black boxes. Black boxes can be an acceptable solution for unimportant problems (in terms of degree of impact), but they have a fatal flaw for the rest, so explanation tools for them have been rapidly developed. The evaluation of their quality remains an open research question. In this technical report, we revisit the recently proposed post-hoc explainers FEM and MLFEM, which were designed for explaining CNNs in image and video classification tasks. We also propose their evaluation with reference-based and no-reference metrics. The reference-based metrics are the Pearson correlation coefficient and Similarity, computed between the explanation maps and the ground truth, represented by Gaze Fixation Density Maps obtained from a psycho-visual experiment. As a no-reference metric we use the "stability" metric proposed by Alvarez-Melis and Jaakkola. We study its behaviour and its consensus with the reference-based metrics, and show that for several kinds of degradation of the input images, this metric agrees with the reference-based ones. It can therefore be used to evaluate the quality of explainers when ground truth is not available.
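The two reference-based metrics have simple closed forms. A minimal sketch, assuming both maps are non-negative arrays of the same shape (the helper names here are illustrative, not from the report):

```python
import numpy as np

def pearson_cc(expl, gt):
    # Pearson correlation between two (non-constant) maps
    e = expl.ravel() - expl.mean()
    g = gt.ravel() - gt.mean()
    return float((e @ g) / (np.linalg.norm(e) * np.linalg.norm(g)))

def similarity(expl, gt):
    # Similarity (SIM): histogram intersection of the maps normalised to sum to 1
    p = expl / expl.sum()
    q = gt / gt.sum()
    return float(np.minimum(p, q).sum())
```

Both metrics reach 1 when the explanation map matches the gaze density map exactly and decrease as the maps diverge.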
Because of the considerable heterogeneity and complexity of the technological landscape, building accurate forecasting models is a challenging endeavor. Due to their high prevalence in many complex systems, S-curves are a popular forecasting approach in previous work. However, their forecasting performance has not been directly compared to that of other technology forecasting approaches. Additionally, recent developments in time series forecasting that claim to improve forecasting accuracy have yet to be applied to technological development data. This work addresses both research gaps by comparing the forecasting performance of S-curves to a baseline and by developing an autoencoder approach that employs recent advances in machine learning and time series forecasting. S-curve forecasts largely exhibit a mean absolute percentage error (MAPE) comparable to a simple ARIMA baseline; however, for a minority of emerging technologies, the MAPE increases by two orders of magnitude. Our autoencoder approach improves the MAPE by 13.5% on average over the second-best result. It forecasts established technologies with the same accuracy as the other approaches, and it is especially strong at forecasting emerging technologies, with a mean MAPE 18% lower than the next best result. Our results imply that a simple ARIMA model is preferable to the S-curve for technology forecasting. Practitioners looking for more accurate forecasts should opt for the presented autoencoder approach.
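For reference, the logistic S-curve and the MAPE metric used in such comparisons can be written down directly (a generic sketch; the paper's exact parameterisation may differ):

```python
import numpy as np

def s_curve(t, L, k, t0):
    # logistic growth: saturation level L, growth rate k, inflection point t0
    return L / (1.0 + np.exp(-k * (t - t0)))

def mape(actual, forecast):
    # mean absolute percentage error, in percent (assumes actual != 0)
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return 100.0 * float(np.mean(np.abs((actual - forecast) / actual)))
```

At the inflection point t0 the S-curve passes through half its saturation level, which is what makes early-stage (emerging-technology) extrapolation so sensitive to the fitted parameters.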
We derive a learning framework to generate routing/pickup policies for a fleet of vehicles tasked with servicing stochastically appearing requests on a city map. We focus on policies that 1) give rise to coordination amongst the vehicles, thereby reducing wait times for servicing requests, 2) are non-myopic, considering a priori unknown potential future requests, and 3) can adapt to changes in the underlying demand distribution. Specifically, we are interested in adapting to fluctuations of actual demand conditions in urban environments, such as on-peak vs. off-peak hours. We achieve this through a combination of (i) online play, a lookahead optimization method that improves the performance of rollout methods via an approximate policy iteration step, and (ii) an offline approximation scheme that allows for adapting to changes in the underlying demand model. In particular, we achieve adaptivity of our learned policy to different demand distributions by quantifying a region of validity using the q-valid radius of a Wasserstein ambiguity set. We propose a mechanism for switching away from the originally trained offline approximation when the current demand falls outside the original validity region; in this case, we use an offline architecture trained on a historical demand model that is closer to the current demand in terms of Wasserstein distance. We learn routing and pickup policies over real taxicab requests in downtown San Francisco with high variability between on-peak and off-peak hours, demonstrating the ability of our method to adapt to real fluctuations in demand distributions. Our numerical results demonstrate that our method outperforms rollout-based reinforcement learning, as well as several benchmarks based on classical methods from the field of operations research.
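The switching mechanism can be illustrated with the closed form of the 1-D Wasserstein distance between equal-sized empirical samples (a hypothetical sketch: the paper works with demand distributions over a city map and a q-valid radius computed offline, and the helper names here are not from the paper):

```python
import numpy as np

def wasserstein_1d(u, v):
    # W1 between two equal-sized empirical samples: mean gap of sorted values
    return float(np.mean(np.abs(np.sort(u) - np.sort(v))))

def select_model(current_demand, trained_models, radius):
    """Keep the active model while demand stays inside its validity region;
    otherwise switch to the historical model closest in Wasserstein distance."""
    dists = [wasserstein_1d(current_demand, hist) for hist, _ in trained_models]
    if dists[0] <= radius:       # model 0 is the currently active one
        return 0
    return int(np.argmin(dists))
```

The radius plays the role of the validity region: inside it, the offline approximation is kept; outside it, the closest historical demand model takes over.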
In this paper we address the solution of the popular Wordle puzzle, using new reinforcement learning methods, which apply more generally to adaptive control of dynamic systems and to classes of Partially Observable Markov Decision Process (POMDP) problems. These methods are based on approximation in value space and the rollout approach, admit a straightforward implementation, and provide improved performance over various heuristic approaches. For the Wordle puzzle, they yield on-line solution strategies that are very close to optimal at relatively modest computational cost. Our methods are viable for more complex versions of Wordle and related search problems, for which an optimal strategy would be impossible to compute. They are also applicable to a wide range of adaptive sequential decision problems that involve an unknown or frequently changing environment whose parameters are estimated on-line.
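A minimal sketch of the one-step lookahead idea on Wordle-style feedback (not the paper's exact rollout: the base heuristic here simply minimises the expected size of the surviving candidate set):

```python
from collections import Counter

def feedback(guess, answer):
    # Wordle colouring: 2 = green, 1 = yellow, 0 = grey
    result = [0] * len(guess)
    remaining = Counter()
    for i, (g, a) in enumerate(zip(guess, answer)):
        if g == a:
            result[i] = 2
        else:
            remaining[a] += 1
    for i, g in enumerate(guess):
        if result[i] == 0 and remaining[g] > 0:
            result[i] = 1
            remaining[g] -= 1
    return tuple(result)

def best_guess(candidates):
    # one-step lookahead: choose the guess whose feedback partition leaves
    # the smallest expected number of surviving candidates
    def expected_size(guess):
        parts = Counter(feedback(guess, ans) for ans in candidates)
        return sum(n * n for n in parts.values()) / len(candidates)
    return min(candidates, key=expected_size)
```

A true rollout would instead simulate the base policy to completion from each successor state and pick the guess with the smallest expected number of remaining turns; the sketch above captures only the lookahead step.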
Predicting the country where a picture was taken has many potential applications, such as countering false claims, identifying impostors, preventing disinformation campaigns, and detecting fake news. Previous work has mostly focused on estimating the geographic coordinates at which a picture was taken. However, from both a semantic and a forensic point of view, recognizing the country in which an image was taken may be more important than determining its spatial coordinates. So far, only a few works have addressed this task, mostly relying on images containing characteristic landmarks, such as iconic monuments. Within the above framework, this paper provides two main contributions. First, we introduce a new dataset, the VippGeo dataset, containing almost 4 million images that can be used to train DL models for country classification. The dataset contains only images relevant to country recognition, and it was built by taking care to remove non-significant images, such as images depicting faces or specific irrelevant objects like airplanes or ships. Second, we use the dataset to train deep learning architectures that cast the country-recognition problem as a classification problem. Our experiments show that our networks provide better results than the current state of the art. In particular, we found that asking the network to identify the country directly gives better results than first estimating the geo-coordinates and then using them to trace back the country where the picture was taken.
Even when models are effective, their use must be accompanied by an understanding of the transformed data at every level (upstream and downstream). There is therefore a growing need to relate individual data to the choices an algorithm may make based on its analysis (for example, the recommendation of a product or a promotional offer, or an insurance rate representing a risk). Model users must make sure that models do not discriminate and that their results can be explained. This paper introduces the importance of model interpretation and addresses the notion of model transparency. Within an insurance context, it specifically illustrates how certain tools can be used to enforce the control of actuarial models that can today take advantage of machine learning. On a simple example of loss frequency estimation in car insurance, we show the interest of some interpretability methods in adapting explanations to the target audience.
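As a hypothetical illustration of the kind of transparent actuarial model discussed (not the paper's code), here is a Poisson GLM for claim frequency fitted by gradient descent; its coefficients are directly interpretable, which is what makes such models easy to explain to a target audience:

```python
import numpy as np

def fit_poisson_glm(X, y, lr=0.05, iters=5000):
    # Poisson regression with log link: E[y] = exp(X @ w)
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ w)
        grad = X.T @ (mu - y) / len(y)  # gradient of the negative log-likelihood
        w -= lr * grad
    return w
```

Since exp(w_j) is the multiplicative effect of a one-unit increase in feature j on the expected claim frequency, each coefficient translates into an audience-friendly statement such as "this risk factor raises expected frequency by X%".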
Modern industrial facilities generate large volumes of raw sensor data during the production process. This data is used to monitor and control the processes and can be analysed to detect and predict process abnormalities. Typically, the data has to be annotated by experts in order to be further used in predictive modelling. Most of today's research focuses either on unsupervised anomaly detection algorithms or on supervised methods that require manually annotated data. These studies are often conducted on data covering narrow classes of events generated by process simulators, and the proposed algorithms are rarely validated on publicly available datasets. In this paper, we propose a novel method for unsupervised fault detection and diagnosis for industrial chemical sensor data. We demonstrate our model's performance on two publicly available datasets of the Tennessee Eastman Process with various fault types. The results show that our method significantly outperforms existing approaches (+0.2-0.3 TPR at a fixed FPR) and detects most of the process faults without using expert annotations. In addition, we performed experiments showing that our method is suitable for real-world applications where the number of fault types is not known in advance.
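A classical unsupervised baseline for this kind of multivariate sensor data (purely illustrative; not the paper's method) is PCA-based process monitoring: fit a low-dimensional subspace on normal operation, then flag samples whose squared prediction error (SPE) exceeds a threshold calibrated on the normal data:

```python
import numpy as np

def fit_pca_monitor(X_normal, n_components=2):
    """Fit a PCA subspace on normal-operation data; threshold = 99th pct of SPE."""
    mean, std = X_normal.mean(axis=0), X_normal.std(axis=0)
    Z = (X_normal - mean) / std
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                        # retained loadings
    spe = np.sum((Z - Z @ P @ P.T) ** 2, axis=1)   # squared prediction error
    return mean, std, P, np.percentile(spe, 99)

def detect(X, mean, std, P, threshold):
    """Flag samples whose SPE exceeds the normal-operation threshold."""
    Z = (X - mean) / std
    spe = np.sum((Z - Z @ P @ P.T) ** 2, axis=1)
    return spe > threshold
```

A fault that breaks the normal correlation structure among sensors leaves the fitted subspace and produces a large SPE, so no fault labels are needed for training.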